Reducing the representational discrepancy between source and target domains is a key component of maximizing model generalization. In this work, we advocate leveraging natural-language supervision for the domain generalization task. We introduce two modules to ground visual representations with texts containing typical reasoning of humans: (1) a Visual and Textual Joint Embedder and (2) a Textual Explanation Generator. The former learns a joint embedding space of images and texts, through which we can ground high-level, class-discriminative information into the model. The latter leverages an explainable model and generates explanations justifying the rationale behind its decisions. To the best of our knowledge, this is the first work to leverage vision-and-language cross-modality for the domain generalization task. Our experiments with a newly created CUB-DG benchmark dataset demonstrate that cross-modal supervision can be successfully used to ground domain-invariant visual representations and improve model generalization. Furthermore, on a large-scale domain generalization benchmark, our proposed method achieves state-of-the-art results, ranking first in average performance over five multi-domain datasets. The dataset and codes are available at https://github.com/mswzeus/gvrt.
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning, both in theory and practice. Given a graph, we simply treat all nodes and edges as independent tokens, augment them with token embeddings, and feed them to a Transformer. With an appropriate choice of token embeddings, we prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers, which is already more expressive than all message-passing graph neural networks (GNNs). When trained on a large-scale graph dataset (PCQM4Mv2), our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results than GNN baselines and competitive results compared with Transformer variants that have sophisticated graph-specific inductive biases. Our implementation is available at https://github.com/jw9730/tokengt.
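The node-and-edge tokenization described above can be sketched as follows. This is an illustrative simplification, not the paper's exact implementation: random node identifiers and a scalar type flag stand in for TokenGT's orthonormal node identifiers and learned type embeddings.

```python
import numpy as np

def tokenize_graph(node_feats, edge_list, edge_feats, d_id=4):
    """Turn a graph into a flat token sequence: one token per node, one per edge.

    Each token = [features | identifier slots | type flag]. A node token
    carries its own identifier twice; an edge token carries the identifiers
    of its two endpoints, which lets a plain Transformer recover the
    incidence structure without any graph-specific architecture.
    """
    n, _ = node_feats.shape
    ids = np.random.randn(n, d_id)  # illustrative node identifiers

    node_tokens = np.concatenate(
        [node_feats, ids, ids, np.zeros((n, 1))], axis=1)  # type flag 0 = node
    src, dst = edge_list[:, 0], edge_list[:, 1]
    edge_tokens = np.concatenate(
        [edge_feats, ids[src], ids[dst],
         np.ones((len(edge_list), 1))], axis=1)            # type flag 1 = edge
    return np.concatenate([node_tokens, edge_tokens], axis=0)

# A triangle graph: 3 nodes, 3 edges, 2-dim features everywhere.
X = np.ones((3, 2))
E = np.array([[0, 1], [1, 2], [2, 0]])
F = np.zeros((3, 2))
tokens = tokenize_graph(X, E, F)
print(tokens.shape)  # (6, 11): 3 node + 3 edge tokens, each 2 + 4 + 4 + 1 dims
```

The resulting (n + m)-token sequence can then be fed to any off-the-shelf Transformer encoder.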
We improve the recently developed Neural DUDE, a neural network-based adaptive discrete denoiser, by combining it with a supervised learning framework. Namely, we make supervised pre-training of Neural DUDE compatible with adaptive fine-tuning on the given noisy data subject to denoising. As a result, we achieve a significantly improved denoising capability compared with the vanilla Neural DUDE, which only carries out the adaptive fine-tuning step from randomly initialized parameters. Moreover, we show that adaptive fine-tuning makes the algorithm robust, such that a noise-mismatched or blindly trained supervised model can still achieve the performance of a matched model. Furthermore, we make several algorithmic advances that make Neural DUDE more scalable and able to handle multi-dimensional data or data with larger alphabet sizes. We systematically demonstrate our improvements on two very diverse datasets: binary images and DNA sequences.
Previous unsupervised sentence embedding studies have focused on data augmentation methods, such as dropout and rule-based sentence transformations. However, these approaches have limited control over the fine-grained semantics of the augmented views of a sentence. This results in supervision signals that are insufficient for capturing the semantic similarity of similar sentences. In this work, we find that using neighbor sentences enables capturing a more accurate semantic similarity between similar sentences. Based on this finding, we propose RankEncoder, which uses relations between an input sentence and sentences in a corpus to train unsupervised sentence encoders. We evaluate RankEncoder from three perspectives: 1) semantic textual similarity performance, 2) efficacy on similar sentence pairs, and 3) the universality of RankEncoder. Experimental results show that RankEncoder achieves 80.07% Spearman's correlation, a 1.1% absolute improvement over the previous state-of-the-art performance. The improvement is even more significant, 1.73%, on similar sentence pairs. In addition, we demonstrate that RankEncoder is universally applicable to existing unsupervised sentence encoders.
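A minimal sketch of the neighbor-based idea, under my own simplified reading rather than the authors' exact formulation: represent each sentence by its similarity vector against a corpus, and compare two sentences through those vectors instead of directly.

```python
import numpy as np

def rank_similarity(u, v, corpus):
    """Compare two sentence embeddings through their relations to a corpus.

    Instead of cos(u, v) directly, build each sentence's similarity vector
    against every corpus sentence and take the cosine of those vectors:
    two sentences are similar if they relate to the corpus in the same way.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    ru = np.array([cos(u, c) for c in corpus])
    rv = np.array([cos(v, c) for c in corpus])
    return cos(ru, rv)

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 8))     # stand-in corpus embeddings
u = rng.normal(size=8)
v = u + 0.1 * rng.normal(size=8)       # near-duplicate of u
w = rng.normal(size=8)                 # unrelated sentence
print(rank_similarity(u, v, corpus) > rank_similarity(u, w, corpus))
```

Near-duplicate pairs score higher than unrelated pairs because their corpus-relation vectors almost coincide.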
The growing interest in intelligent services and privacy protection for mobile devices has given rise to the widespread application of federated learning in Multi-access Edge Computing (MEC). Diverse user behaviors call for personalized services with heterogeneous Machine Learning (ML) models on different devices. Federated Multi-task Learning (FMTL) is proposed to train related but personalized ML models for different devices, whereas previous works suffer from excessive communication overhead during training and neglect the model heterogeneity among devices in MEC. Introducing knowledge distillation into FMTL can simultaneously enable efficient communication and model heterogeneity among clients, whereas existing methods rely on a public dataset, which is impractical in reality. To tackle this dilemma, Federated MultI-task Distillation for Multi-access Edge CompuTing (FedICT) is proposed. FedICT keeps local and global knowledge apart during the bi-directional distillation processes between clients and the server, aiming to enable multi-task clients while alleviating the client drift derived from divergent optimization directions of client-side local models. Specifically, FedICT includes Federated Prior Knowledge Distillation (FPKD) and Local Knowledge Adjustment (LKA). FPKD is proposed to reinforce the clients' fitting of local data by introducing prior knowledge of the local data distributions. Moreover, LKA is proposed to correct the distillation loss of the server, making the transferred local knowledge better match the generalized representation. Experiments on three datasets show that FedICT significantly outperforms all compared benchmarks in various data-heterogeneity and model-architecture settings, achieving improved accuracy with less than 1.2% of the training communication overhead of FedAvg and no more than 75% of the training communication rounds of FedGKT.
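Distillation-based federated learning exchanges soft predictions rather than full model weights, which is where the communication savings come from. The following is a generic temperature-scaled distillation loss (standard knowledge distillation, not FedICT's exact FPKD/LKA objectives):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    In client-to-server (or server-to-client) distillation, only these
    logits travel over the network, so communication cost is independent
    of model size and the two sides may use different architectures.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

t = np.array([2.0, 0.5, -1.0])
print(distill_loss(t, t))                 # identical logits -> zero loss
print(distill_loss(np.zeros(3), t) > 0)   # mismatched logits -> positive loss
```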
In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. Existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log 1/\epsilon)$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are the condition numbers with respect to the variable blocks $x$ and $y$. We propose a new algorithm that requires only $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log 1/\epsilon)$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and the computation of $\nabla_y f(x,y)$ is significantly cheaper than that of $\nabla_x f(x,y)$. In this case, our algorithm substantially outperforms the existing state-of-the-art methods.
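To make the separation concrete, the total costs can be compared when the two gradients have different per-evaluation prices. The symbols $T_x$ and $T_y$ below are per-call costs introduced here for illustration; they do not appear in the abstract.

```latex
% Existing optimal methods: both gradients are called the same number of times
\mathrm{cost}_{\mathrm{old}}
  = \mathcal{O}\!\left(\sqrt{\max\{\kappa_x,\kappa_y\}}\,\log\tfrac{1}{\epsilon}\right)(T_x + T_y)

% Proposed method: each block pays only for its own condition number
\mathrm{cost}_{\mathrm{new}}
  = \mathcal{O}\!\left(\sqrt{\kappa_x}\,\log\tfrac{1}{\epsilon}\right)T_x
  + \mathcal{O}\!\left(\sqrt{\kappa_y}\,\log\tfrac{1}{\epsilon}\right)T_y
```

The new bound decouples the call counts: the cheaper, better-conditioned block $y$ no longer inherits the $\sqrt{\kappa_x}$ factor of the harder block.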
Feature transformation for AI is an essential task for boosting the effectiveness and interpretability of machine learning (ML). Feature transformation aims to transform the original data to identify an optimal feature space that enhances the performance of a downstream ML model. Existing studies either combine preprocessing, feature selection, and generation skills to empirically transform data, or automate feature transformation with machine intelligence, such as reinforcement learning. However, existing studies suffer from: 1) a high-dimensional, non-discriminative feature space; 2) an inability to represent complex situational states; and 3) inefficiency in integrating local and global feature information. To fill the research gap, we formulate the feature transformation task as an iterative, nested process of feature generation and selection, where feature generation generates and adds new features based on the original features, and feature selection removes redundant features to control the size of the feature space. Finally, we present extensive experiments and case studies illustrating 24.7% improvements in F1 scores compared with SOTAs, and robustness in high-dimensional data.
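The nested generate-then-select loop can be sketched as follows. This is a simplified stand-in, assuming pairwise-product generation and correlation-based selection in place of the paper's learned policy:

```python
import numpy as np

def generate(X):
    """Feature generation: add pairwise-product features to the originals."""
    n, d = X.shape
    new = [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack([X] + new) if new else X

def select(X, y, k):
    """Feature selection: keep the k features most correlated with the target,
    controlling the size of the feature space."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(scores)[::-1][:k]
    return X[:, keep]

def transform(X, y, rounds=2, k=5):
    for _ in range(rounds):
        X = select(generate(X), y, k)  # grow the space, then prune back to k
    return X

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)  # hidden interaction target
Xt = transform(X, y)
print(Xt.shape)  # (200, 5)
```

Each iteration expands the space with candidate features and immediately prunes it, so the dimensionality stays bounded while useful interactions (like the hidden product above) get a chance to surface.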
In recent years, considerable effort has been put into pushing forward the real-world application of dynamic digital humans (DDHs). However, most current quality assessment research focuses on evaluating static 3D models and usually ignores motion distortions. Therefore, in this paper, we construct a large-scale dynamic digital human quality assessment (DDH-QA) database with diverse motion content as well as multiple distortions to comprehensively study the perceptual quality of DDHs. Both model-based distortions (noise, compression) and motion-based distortions (binding errors, motion unnaturalness) are taken into consideration. Ten types of common motion are employed to drive the DDHs, and a total of 800 DDHs are generated in the end. Afterward, we render the video sequences of the distorted DDHs as the evaluation media and carry out a well-controlled subjective experiment. A benchmark experiment is then conducted with state-of-the-art video quality assessment (VQA) methods, and the experimental results show that existing VQA methods are limited in assessing the perceptual loss of DDHs. The database will be made publicly available to facilitate future research.
Recently, over-height vehicle strikes have occurred frequently, causing great economic costs and serious safety problems. Hence, an alert system that can accurately discover any possible height-limiting devices in advance needs to be employed in modern large or medium-sized vehicles, such as touring cars. Detecting and estimating the height-limiting devices is the key to a successful height limit alert system. Though some works have studied height limit estimation, existing methods are either too computationally expensive or not accurate enough. In this paper, we propose a novel stereo-based pipeline named SHLE for height limit estimation. Our SHLE pipeline consists of two stages. In stage 1, a novel device detection and tracking scheme is introduced, which accurately locates the height-limiting devices in the left or right image. Then, in stage 2, the depth is temporally measured, extracted, and filtered to calculate the height of the limiting device. To benchmark the height limit estimation task, we build a large-scale dataset named "Disparity Height", in which stereo images, pre-computed disparities, and ground-truth height limit annotations are provided. We conducted extensive experiments on "Disparity Height", and the results show that SHLE achieves an average error below 10 cm even when the car is 70 m away from the devices. Our method also outperforms all compared baselines and achieves state-of-the-art performance. Code is available at https://github.com/Yang-Kaixing/SHLE.
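The stage-2 geometry rests on standard stereo triangulation. A minimal sketch follows; the focal length, baseline, and pixel coordinates are made-up illustration values, and SHLE's actual temporal measurement and filtering are more involved:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def device_height(y_device_px, y_road_px, depth_m, focal_px):
    """Height of a point above the road from its vertical image offset.

    Pinhole model: a vertical pixel offset dy at depth Z spans
    Z * dy / f meters in the scene.
    """
    return depth_m * (y_road_px - y_device_px) / focal_px

f, B = 1000.0, 0.5                    # focal length (px), stereo baseline (m)
Z = depth_from_disparity(10.0, f, B)  # 10 px disparity -> 50 m away
H = device_height(300.0, 380.0, Z, f) # bar sits 80 px above the road line
print(Z, H)                           # 50.0 4.0
```

Note how sensitive Z is at long range: at 70 m the disparity is only a few pixels, which is why SHLE filters depth over time rather than trusting a single frame.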
Steering language generation towards objectives, or away from undesired content, has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and better-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
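The critic-weighted decoding step can be sketched generically as reweighting a frozen LM's next-token distribution. The mixing rule and the β coefficient below are my own assumptions for illustration, not necessarily the paper's exact formula:

```python
import numpy as np

def log_softmax(z):
    m = z.max()
    return z - m - np.log(np.exp(z - m).sum())

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def critic_decode(lm_logits, critic_scores, beta=2.0):
    """Steer a frozen LM at decoding time.

    p'(t) is proportional to p_LM(t) * exp(beta * critic(t)): the LM's
    parameters are never updated; only its output distribution is
    reshaped by the critic before sampling.
    """
    return softmax(log_softmax(lm_logits) + beta * critic_scores)

lm = np.array([3.0, 2.0, 1.0])      # the frozen LM prefers token 0
critic = np.array([0.0, 2.0, 0.0])  # the critic rewards token 1
p = critic_decode(lm, critic)
print(p.argmax())                   # steering shifts the mode to token 1
```

With β = 0 the critic is ignored and the LM's own preference is recovered, which makes the control strength a single tunable dial.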